continuous hierarchical representation
- North America > United States (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- (5 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
Reviews: Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders
It uses ideas similar to very recent and concurrent work (Ganea et al., 2018; Ovinnikov, 2018; Nagano et al., 2019), but the paper makes clear how it differs from this related work.

Quality: The submission seems technically sound, with detailed experimental results. The paper empirically compares the approach mostly with its Euclidean counterpart. This is fair, of course, but it would be interesting to see how it compares empirically with the Poincaré Wasserstein Autoencoder (Ovinnikov, 2019) and the hyperboloid model of Nagano et al. (2019): do they yield similar latent representations, and how do the respective sample qualities compare? The background on Riemannian geometry is to the point, so the paper is for the most part accessible to readers without training in non-Euclidean geometry. Nevertheless, readers could benefit from more high-level guidance in Appendix B, e.g. what do we learn from Sections B.8 and B.9?

Significance: I feel that this is significant work and that others can build on these ideas either methodologically or experimentally.
Reviews: Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders
This paper examines an alternative latent space, with sensible ablation studies and sensible proposals for modifying the rest of the architecture to match. Our main complaint is that the paper lacks an empirical comparison with very recent related work (Ovinnikov, 2019; Nagano et al., 2019). However, even without such a comparison, we think it is still a complete and interesting paper.
Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders
Mathieu, Emile; Le Lan, Charline; Maddison, Chris J.; Tomioka, Ryota; Teh, Yee Whye
The Variational Auto-Encoder (VAE) is a popular method for learning a generative model and embeddings of the data. Many real datasets are hierarchically structured. We therefore endow VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derive the necessary methods to work with two main Gaussian generalisations on that space. We empirically show better generalisation to unseen data than the Euclidean counterpart, and can qualitatively and quantitatively better recover hierarchical structures. Published at the Neural Information Processing Systems Conference.
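One of the Gaussian generalisations on the Poincaré ball used by this line of work is the wrapped normal: a Euclidean Gaussian sampled in the tangent space at a point and pushed onto the ball through the exponential map. Below is a minimal sketch of sampling a wrapped normal centred at the origin (function names are illustrative, not from the paper's code; the full model also handles non-origin means via parallel transport, which is omitted here):

```python
import numpy as np

def expmap0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with curvature -c.
    Maps a Euclidean tangent vector v to a point strictly inside the ball."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v  # the origin maps to itself
    # tanh(.) < 1, so the result always has norm < 1/sqrt(c)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def sample_wrapped_normal_origin(dim, sigma, c=1.0, seed=None):
    """Draw v ~ N(0, sigma^2 I) in the tangent space at the origin,
    then push it forward through the exponential map."""
    rng = np.random.default_rng(seed)
    v = rng.normal(scale=sigma, size=dim)
    return expmap0(v, c)

z = sample_wrapped_normal_origin(dim=2, sigma=0.5, seed=0)
# z lies inside the unit ball, i.e. np.linalg.norm(z) < 1
```

Because tanh saturates, samples are squashed toward the boundary as the tangent-space norm grows, which is what lets points near the boundary encode deep levels of a hierarchy.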